Search for: All records; Creators/Authors contains: "Ott, Jordan"


  1. The interiors of neutron stars reach densities and temperatures beyond the limits of terrestrial experiments, providing vital laboratories for probing nuclear physics. While the star's interior is not directly observable, its pressure and density determine the star's macroscopic structure, which in turn affects the spectra observed by telescopes. The relationship between the observations and the internal state is complex and partially intractable, presenting difficulties for inference. Previous work has focused on regressing the parameters describing the internal state from stellar spectra. We demonstrate a calculation of the full likelihood of the internal-state parameters given observations, accomplished by replacing intractable elements with machine learning models trained on samples of simulated stars. Our machine-learning-derived likelihood allows us to perform maximum a posteriori estimation of the parameters of interest, as well as full likelihood scans. We demonstrate the technique by inferring stellar mass and radius from an individual stellar spectrum, as well as equation of state parameters from a set of spectra. Our results are more precise than pure regression models, reducing the width of the parameter residuals by 11.8% in the most realistic scenario. The neural networks will be released as a tool for fast simulation of neutron star properties and observed spectra.
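A minimal sketch of the machine-learning-derived likelihood idea, with a toy analytic emulator standing in for the trained network; the emulator, flat priors, bin count, and parameter ranges below are illustrative assumptions, not the authors' released code:

```python
import numpy as np
from scipy.optimize import minimize

N_BINS = 64  # number of spectral energy bins (assumed)

def emulate_spectrum(mass, radius):
    """Stand-in for a neural network trained on simulated stars:
    maps internal-state parameters to expected photon counts per bin.
    A smooth analytic placeholder is used so the script runs."""
    energy = np.linspace(0.1, 10.0, N_BINS)       # keV grid (assumed)
    compactness = mass / radius                   # crude proxy
    return 500.0 * np.exp(-energy * compactness) + 5.0

def neg_log_posterior(theta, observed):
    mass, radius = theta
    # Flat priors over plausible ranges (assumed)
    if not (1.0 < mass < 3.0 and 8.0 < radius < 16.0):
        return np.inf
    mu = emulate_spectrum(mass, radius)
    # Negative Poisson log-likelihood of the observed counts
    return -np.sum(observed * np.log(mu) - mu)

rng = np.random.default_rng(0)
truth = (1.4, 12.0)                               # M [Msun], R [km]
observed = rng.poisson(emulate_spectrum(*truth))

# Maximum a posteriori estimate of (mass, radius); a full scan would
# instead evaluate neg_log_posterior over a parameter grid.
result = minimize(neg_log_posterior, x0=[1.8, 10.0], args=(observed,),
                  method="Nelder-Mead")
print("MAP (mass, radius):", result.x)
```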
  2. Abstract Neutron stars provide a unique laboratory for studying matter at extreme pressures and densities. While there is no direct way to explore their interior structure, X-rays emitted from these stars can indirectly provide clues to the equation of state (EOS) of the superdense nuclear matter through inference of the star's mass and radius. However, inferring the EOS directly from a star's X-ray spectra is extremely challenging and is complicated by systematic uncertainties. The current state of the art uses simulation-based likelihoods in a piecewise method that relies on certain theoretical assumptions and simplifications about the uncertainties: it first infers the star's mass and radius to reduce the dimensionality of the problem, and from those quantities infers the EOS. We demonstrate a series of enhancements to the state of the art, in terms of realistic uncertainty quantification and a path towards circumventing the need for theoretical assumptions to infer physical properties with machine learning. We also demonstrate novel inference of the EOS directly from the high-dimensional spectra of observed stars, avoiding the intermediate mass-radius step. Our network is conditioned on the sources of uncertainty of each star, allowing for natural and complete propagation of uncertainties to the EOS.
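A hedged Keras sketch of a network "conditioned on the sources of uncertainty": the spectrum and per-star nuisance parameters enter as separate inputs and are fused before the EOS regression head. Layer sizes, the nuisance-parameter count, and all names are assumptions for illustration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_BINS, N_NUISANCE, N_EOS = 250, 3, 2  # illustrative dimensions

spectrum = keras.Input(shape=(N_BINS,), name="spectrum")
nuisance = keras.Input(shape=(N_NUISANCE,), name="uncertainty_sources")

# Embed the spectrum, then condition on the nuisance parameters by
# concatenation before regressing the EOS parameters directly.
h = layers.Dense(128, activation="relu")(spectrum)
h = layers.Concatenate()([h, nuisance])
h = layers.Dense(64, activation="relu")(h)
eos = layers.Dense(N_EOS, name="eos_parameters")(h)

model = keras.Model([spectrum, nuisance], eos)
model.compile(optimizer="adam", loss="mse")

# Train on simulated stars with known EOS labels (random toy data here).
x_spec = np.random.rand(1024, N_BINS).astype("float32")
x_nuis = np.random.rand(1024, N_NUISANCE).astype("float32")
y_eos = np.random.rand(1024, N_EOS).astype("float32")
model.fit([x_spec, x_nuis], y_eos, epochs=2, batch_size=64, verbose=0)
```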
  3. Abstract We report a method for the phase reconstruction of an ultrashort laser pulse based on deep learning of the nonlinear spectral changes induced by self-phase modulation. The neural networks were trained on simulated pulses with random initial phases and spectra, with pulse durations between 8.5 and 65 fs. The reconstruction is valid at moderate spectral resolution and is robust to noise. The method was validated on experimental data produced by an ultrafast laser system, where near real-time phase reconstructions were performed. It can be used in systems with known linear and nonlinear responses, even when the fluence is not known, making it ideal for difficult-to-measure beams such as the high-energy, large-aperture beams produced in petawatt systems.
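An illustrative toy of the training setup: simulate pulses with random spectral-phase coefficients, propagate them through an idealized self-phase-modulation step, and regress the coefficients from the resulting spectra. The Gaussian pulse shape, B-integral value, grid sizes, and two-coefficient phase model are all assumptions, not the paper's configuration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N = 256                                   # time/frequency grid points
t = np.linspace(-4.0, 4.0, N)             # time axis (arbitrary units)

def spm_spectrum(gdd, tod, b_integral=3.0):
    """Spectrum after idealized SPM of a Gaussian pulse carrying
    quadratic (gdd) and cubic (tod) spectral phase."""
    w = 2 * np.pi * np.fft.fftfreq(N, d=t[1] - t[0])
    spec_field = np.exp(-w**2) * np.exp(1j * (gdd * w**2 + tod * w**3))
    field = np.fft.ifft(spec_field)                  # to time domain
    intensity = np.abs(field) ** 2
    field = field * np.exp(1j * b_integral * intensity / intensity.max())
    out = np.abs(np.fft.fft(field)) ** 2             # back to spectrum
    return out / out.max()

rng = np.random.default_rng(1)
coeffs = rng.uniform(-1.0, 1.0, size=(2048, 2))      # [gdd, tod] labels
spectra = np.array([spm_spectrum(g, c) for g, c in coeffs])

model = keras.Sequential([
    keras.Input(shape=(N,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(2),                                 # predict [gdd, tod]
])
model.compile(optimizer="adam", loss="mse")
model.fit(spectra, coeffs, epochs=5, batch_size=64, verbose=0)
```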
  4. Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide autodifferentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate them with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search of more than one hundred candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred and used in Fortran. Such a process allows the model's emergent behavior to be assessed, i.e., when fit imperfections are coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of the optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability, including some with reduced error, for an especially challenging training dataset.
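A sketch of the Python half of an FKB-style workflow: train a small dense emulator in Keras and save it to HDF5 for conversion. The layer sizes are placeholders, and the conversion/Fortran steps are described only in comments because their exact paths and interfaces depend on the FKB version:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# A dense emulator of subgrid physics: coarse-grid column state in,
# subgrid tendencies out (input/output sizes are illustrative).
model = keras.Sequential([
    keras.Input(shape=(94,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(65),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(512, 94).astype("float32")
y = np.random.rand(512, 65).astype("float32")
model.fit(x, y, epochs=2, batch_size=64, verbose=0)

model.save("emulator.h5")
# Next steps (outside this script; interfaces vary by FKB version):
# run FKB's weight-conversion utility on emulator.h5, then load the
# converted file from Fortran via FKB's network type and call its
# inference routine inside the host climate model.
```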
  5. Abstract We explore the potential of feed-forward deep neural networks (DNNs) for emulating cloud superparameterization in realistic geography, using offline fits to data from the Superparameterized Community Atmosphere Model. To identify the network architecture of greatest skill, we formally optimize hyperparameters using ~250 trials. Our DNN explains over 70% of the temporal variance at the 15-min sampling scale throughout the mid-to-upper troposphere. Autocorrelation timescale analysis compared against DNN skill suggests that the poorer fit in the tropical marine boundary layer is driven by the neural network's difficulty emulating fast, stochastic signals in convection. However, spectral analysis in the temporal domain indicates skillful emulation of signals on diurnal to synoptic scales. A closer look at the diurnal cycle reveals correct emulation of land-sea contrasts and vertical structure in the heating and moistening fields, but some distortion of precipitation. Sensitivity tests targeting precipitation skill reveal complementary effects of adding positive constraints versus hyperparameter tuning, motivating the use of both in the future. A first attempt to force an offline land model with DNN-emulated atmospheric fields produces reassuring results, further supporting the viability of neural network emulation in real-geography settings. Overall, the fit skill is competitive with recent attempts by sophisticated residual and convolutional neural network architectures trained on added information, including memory of past states. Our results confirm the parameterizability of superparameterized convection with continents through machine learning, and we highlight the advantages of casting this problem locally in space and time for accurate emulation and, hopefully, quick implementation of hybrid climate models. A toy version of the hyperparameter search appears below.
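A toy version of the hyperparameter search described above: random search over depth, width, and learning rate of a feed-forward DNN. The trial count, ranges, and input/output sizes are placeholders, not the study's actual settings (which used ~250 formally optimized trials):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(42)
x = rng.random((2048, 108), dtype=np.float32)    # coarse-grid inputs
y = rng.random((2048, 112), dtype=np.float32)    # subgrid tendencies

def build(depth, width, lr):
    model = keras.Sequential([keras.Input(shape=(x.shape[1],))])
    for _ in range(depth):
        model.add(layers.Dense(width, activation="relu"))
    model.add(layers.Dense(y.shape[1]))
    model.compile(optimizer=keras.optimizers.Adam(lr), loss="mse")
    return model

best_loss, best_config = np.inf, None
for trial in range(25):                          # toy count; paper used ~250
    depth = int(rng.integers(2, 8))
    width = int(rng.choice([64, 128, 256, 512]))
    lr = 10.0 ** rng.uniform(-4, -2)
    hist = build(depth, width, lr).fit(
        x, y, validation_split=0.2, epochs=3, batch_size=128, verbose=0)
    val = hist.history["val_loss"][-1]
    if val < best_loss:
        best_loss, best_config = val, (depth, width, lr)

print("best val_loss:", best_loss, "config (depth, width, lr):", best_config)
```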